Opening the Software Engineering Toolbox for the Assessment of Trustworthy AI
Trustworthiness is a central requirement for the acceptance and success of human-centered artificial intelligence (AI). To deem an AI system trustworthy, it is crucial to assess its behaviour and characteristics against a gold standard of Trustworthy AI, consisting of guidelines, requirements, or mere expectations. While AI systems are highly complex, their implementations are still based on software. The software engineering community has a long-established toolbox for the assessment of software systems, especially in the context of software testing. In this paper, we argue for the application of software engineering and testing practices to the assessment of trustworthy AI. We make the connection between the seven key requirements defined by the European Commission's High-Level Expert Group on AI and established procedures from software engineering, and raise questions for future work.
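As one illustration of how an item from that toolbox might be applied, consider a minimal sketch of a metamorphic test targeting the "technical robustness and safety" requirement. The helpers load_model and load_samples and the predict method are hypothetical stand-ins for project-specific code; only the pytest testing pattern itself is standard practice.

    import numpy as np
    import pytest

    # Hypothetical project helpers; the testing pattern is the point here.
    from my_project import load_model, load_samples

    @pytest.mark.parametrize("noise_scale", [0.001, 0.01])
    def test_prediction_stable_under_small_noise(noise_scale):
        # Metamorphic relation for technical robustness: a tiny input
        # perturbation should not change the predicted label.
        model = load_model()
        rng = np.random.default_rng(seed=0)
        for x in load_samples(n=20):
            noisy = x + rng.normal(0.0, noise_scale, size=x.shape)
            assert model.predict(x) == model.predict(noisy)

A test of this form turns a Trustworthy AI requirement into an executable check that can run in an ordinary continuous-integration pipeline.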
Evaluating the Robustness of Deep Reinforcement Learning for Autonomous and Adversarial Policies in a Multi-agent Urban Driving Environment
Deep reinforcement learning is actively used for training autonomous and adversarial car policies in simulated driving environments. Due to the wide availability of reinforcement learning algorithms and the lack of a systematic comparison across different driving scenarios, it is unclear which ones are more effective for training and testing autonomous car software in single-agent as well as multi-agent driving environments. A benchmarking framework for comparing deep reinforcement learning algorithms in vision-based autonomous driving would open up possibilities for training better autonomous car driving policies. Furthermore, autonomous cars trained with deep reinforcement learning algorithms are known to be vulnerable to adversarial attacks. To guard against adversarial attacks, we can train autonomous cars on adversarial driving policies. However, we lack the knowledge of which deep reinforcement learning algorithms would act as good adversarial agents able to effectively test autonomous cars. To address these challenges, we provide an open and reusable benchmarking framework for the systematic evaluation and comparative analysis of deep reinforcement learning algorithms for autonomous and adversarial driving in single- and multi-agent environments. Using the framework, we perform a comparative study of five discrete and two continuous action space deep reinforcement learning algorithms. We run the experiments in a vision-only, high-fidelity simulated urban driving environment. The results indicate that only some of the deep reinforcement learning algorithms perform consistently better across single- and multi-agent scenarios when trained in a multi-agent-only setting.
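A minimal sketch of such a benchmarking loop, assuming a Gym-compatible wrapper around the driving simulator (the env id "UrbanDriving-v0" is hypothetical) and using three representative Stable-Baselines3 implementations rather than the paper's exact algorithm set:

    import gymnasium as gym
    from stable_baselines3 import A2C, DQN, PPO
    from stable_baselines3.common.evaluation import evaluate_policy

    # DQN needs a discrete action space; PPO and A2C also handle
    # continuous actions, mirroring the paper's discrete/continuous split.
    ALGORITHMS = {"PPO": PPO, "A2C": A2C, "DQN": DQN}

    def benchmark(env_id="UrbanDriving-v0", total_timesteps=100_000):
        results = {}
        for name, algo in ALGORITHMS.items():
            env = gym.make(env_id)
            # CnnPolicy consumes raw camera frames (the vision-only setting).
            model = algo("CnnPolicy", env, verbose=0)
            model.learn(total_timesteps=total_timesteps)
            mean_r, std_r = evaluate_policy(model, env, n_eval_episodes=10)
            results[name] = (mean_r, std_r)
            env.close()
        return results

Running the same loop over single-agent and multi-agent variants of the environment yields the kind of cross-scenario comparison described above.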
Adversarial Deep Reinforcement Learning for Improving the Robustness of Multi-agent Autonomous Driving Policies
Autonomous cars are well known for being vulnerable to adversarial attacks
that can compromise the safety of the car and pose danger to other road users.
To effectively defend against adversaries, it is necessary not only to test autonomous cars to find driving errors, but also to improve the robustness of the cars to these errors. To this end, in this paper, we propose a two-step methodology for autonomous cars that consists of (i) finding failure states in autonomous cars by training an adversarial driving agent, and (ii) improving the robustness of autonomous cars by retraining them with effective adversarial inputs. Our methodology supports testing autonomous cars (ACs) in a multi-agent environment, where we train and compare adversarial car policies with two custom reward functions to test the driving control decisions of autonomous cars. We run experiments in a vision-based, high-fidelity simulated urban driving environment. Our results show that adversarial testing can be used to find erroneous autonomous driving behavior, followed by adversarial training to improve the robustness of deep reinforcement learning-based autonomous driving policies. We demonstrate that autonomous cars retrained with the effective adversarial inputs noticeably improve their driving policies in terms of reduced collision and off-road steering errors.
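A compact sketch of the two-step loop, again with Stable-Baselines3; the env ids and the ac_policy/adversary_policy constructor arguments are hypothetical hooks into the simulator, not an API from the paper:

    import gymnasium as gym
    from stable_baselines3 import PPO

    # Hypothetical Gym-compatible wrappers around the driving simulator:
    # "AdvDriving-v0" lets the adversary drive against a frozen AC policy,
    # "ACDriving-v0" lets the AC retrain while the frozen adversary drives.

    def adversarial_retraining(ac_model, steps=100_000):
        # Step (i): train an adversarial driver whose reward function pays
        # for inducing collisions and off-road errors in the AC under test.
        adv_env = gym.make("AdvDriving-v0", ac_policy=ac_model)  # hypothetical kwarg
        adversary = PPO("CnnPolicy", adv_env, verbose=0)
        adversary.learn(total_timesteps=steps)

        # Step (ii): retrain the AC against the trained adversary so that
        # failure-inducing situations enter its training experience.
        ac_env = gym.make("ACDriving-v0", adversary_policy=adversary)  # hypothetical kwarg
        ac_model.set_env(ac_env)
        ac_model.learn(total_timesteps=steps)
        return ac_model

Comparing collision and off-road rates before and after the second step then gives a direct measure of the robustness gain.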